A second-order differentiable smoothing approximation lower order exact penalty function
Authors
Abstract
In this paper, we give a smoothing approximation to the lower order exact penalty functions for inequality-constrained optimization problems. Error estimates are obtained relating the optimal objective function values of the smoothed penalty problem, the nonsmooth penalty problem, and the original optimization problem. An algorithm based on the smoothed penalty function is presented and shown to be globally convergent under some mild conditions. Numerical examples are given to illustrate the effectiveness of the proposed smoothing method.
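To make the construction concrete, the sketch below builds a smoothed lower order penalty of the form f(x) + rho * sum_i max(0, g_i(x))**q with 0 < q < 1, replacing max(0, t) by the smooth surrogate 0.5*(t + sqrt(t**2 + eps**2)). This is one standard smoothing chosen for illustration, not necessarily the second-order differentiable approximation proposed in the paper; the test problem and the values of rho, q, and eps are arbitrary.

```python
import numpy as np
from scipy.optimize import minimize

def smoothed_lower_order_penalty(f, gs, rho, q=0.5, eps=1e-3):
    """Smooth approximation of the lower order exact penalty
    f(x) + rho * sum_i max(0, g_i(x))**q  with 0 < q < 1.
    max(0, t) is replaced by phi_eps(t) = 0.5*(t + sqrt(t**2 + eps**2)),
    a standard smoothing used here for illustration only."""
    def penalty(x):
        gvals = np.array([g(x) for g in gs])
        # phi_eps is strictly positive, so raising it to the power q stays smooth
        smooth_max = 0.5 * (gvals + np.sqrt(gvals**2 + eps**2))
        return f(x) + rho * np.sum(smooth_max**q)
    return penalty

# Illustrative problem: min (x1-2)^2 + (x2-1)^2  s.t.  x1 + x2 - 2 <= 0.
f = lambda x: (x[0] - 2.0)**2 + (x[1] - 1.0)**2
gs = [lambda x: x[0] + x[1] - 2.0]

P = smoothed_lower_order_penalty(f, gs, rho=10.0, q=0.5, eps=1e-3)
res = minimize(P, x0=np.zeros(2), method="BFGS")
print(res.x)  # should lie near the constrained minimizer (1.5, 0.5)
```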
Similar articles
Differentiable Exact Penalty Functions for Nonlinear Second-Order Cone Programs
We propose a method for solving nonlinear second-order cone programs (SOCPs), based on a continuously differentiable exact penalty function. The construction of the penalty function is given by incorporating a multipliers estimate in the augmented Lagrangian for SOCPs. Under the nondegeneracy assumption and the strong second-order sufficient condition, we show that a generalized Newton method h...
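For reference, the snippet below writes down the classical augmented Lagrangian for scalar inequality constraints g_i(x) <= 0 (Rockafellar form), which is the NLP analogue of the SOCP construction described above; the multiplier update and the second-order cone structure of that paper are not reproduced, and the objective, constraint, and parameter values are illustrative only.

```python
import numpy as np

def augmented_lagrangian(f, gs, lam, c):
    """Classical augmented Lagrangian for inequality constraints g_i(x) <= 0:
        L(x) = f(x) + (1/(2c)) * sum_i ( max(0, lam_i + c*g_i(x))**2 - lam_i**2 ).
    Shown only as the scalar analogue of the SOCP construction in the abstract."""
    def L(x):
        terms = [max(0.0, lam[i] + c * g(x))**2 - lam[i]**2 for i, g in enumerate(gs)]
        return f(x) + sum(terms) / (2.0 * c)
    return L

# Illustrative data: one constraint x1 + x2 <= 0, zero multiplier estimate.
f = lambda x: (x[0] - 1.0)**2 + (x[1] + 0.5)**2
gs = [lambda x: x[0] + x[1]]
L = augmented_lagrangian(f, gs, lam=[0.0], c=10.0)
print(L(np.array([0.2, -0.2])))  # on the constraint boundary with lam = 0 this equals f(x)
```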
Sl1QP Based Algorithm with Trust Region Technique for Solving Nonlinear Second-Order Cone Programming Problems
In this paper, we propose an algorithm based on Fletcher’s Sl1QP method and the trust region technique for solving Nonlinear Second-Order Cone Programming (NSOCP) problems. The Sl1QP method was originally developed for nonlinear optimization problems with inequality constraints. It converts a constrained optimization problem into an unconstrained problem by using the l1 exact penalty function, ...
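The core conversion mentioned above, from a constrained problem to an unconstrained one via the l1 exact penalty, can be sketched as follows. Only the penalty reformulation is shown, not Fletcher's Sl1QP trust-region subproblems, and the example objective, constraint, and penalty weight sigma are illustrative.

```python
import numpy as np
from scipy.optimize import minimize

def l1_penalty(f, gs, sigma):
    """l1 exact penalty for g_i(x) <= 0: P(x) = f(x) + sigma * sum_i max(0, g_i(x)).
    For sigma larger than the optimal multipliers, minimizers of P solve the
    constrained problem."""
    return lambda x: f(x) + sigma * sum(max(0.0, g(x)) for g in gs)

# Illustrative problem: min x1^2 + x2^2  s.t.  x1 + x2 >= 1, written as 1 - x1 - x2 <= 0.
f = lambda x: x[0]**2 + x[1]**2
gs = [lambda x: 1.0 - x[0] - x[1]]

# Nelder-Mead is used because the l1 term is nonsmooth at the constraint boundary.
res = minimize(l1_penalty(f, gs, sigma=5.0), x0=np.zeros(2), method="Nelder-Mead")
print(res.x)  # should be close to (0.5, 0.5)
```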
Smoothed Lower Order Penalty Function for Constrained Optimization Problems
The paper introduces a smoothing method to the lower order penalty function for constrained optimization problems. It is shown that, under some mild conditions, an optimal solution of the smoothed penalty problem is an approximate optimal solution of the original problem. Based on the smoothed penalty function, an algorithm is presented and its convergence is proved under some mild assumptions....
Continuous Discrete Variable Optimization of Structures Using Approximation Methods
Optimum design of structures is achieved while the design variables are continuous and discrete. To reduce the computational work involved in the optimization process, all the functions that are expensive to evaluate are approximated. To approximate these functions, a semi-quadratic function is employed. Only the diagonal terms of the Hessian matrix are used and these elements are estimated fr...
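As a rough illustration of a separable (diagonal-Hessian) quadratic model of an expensive function, the sketch below estimates the gradient and the diagonal curvatures by central finite differences. Since the snippet above is truncated, this estimation scheme is an assumption, not necessarily the one used in that paper, and the test function and expansion point are arbitrary.

```python
import numpy as np

def diagonal_quadratic_model(func, x0, h=1e-4):
    """Separable quadratic model of an expensive function around x0:
        m(x) = f(x0) + g.(x - x0) + 0.5 * sum_i d_i * (x_i - x0_i)**2,
    keeping only the diagonal of the Hessian. Derivatives are estimated with
    central finite differences as an illustrative stand-in."""
    n = x0.size
    f0 = func(x0)
    g = np.zeros(n)
    d = np.zeros(n)
    for i in range(n):
        e = np.zeros(n)
        e[i] = h
        fp, fm = func(x0 + e), func(x0 - e)
        g[i] = (fp - fm) / (2.0 * h)          # first derivative estimate
        d[i] = (fp - 2.0 * f0 + fm) / h**2    # diagonal second derivative estimate
    return lambda x: f0 + g @ (x - x0) + 0.5 * np.sum(d * (x - x0)**2)

# Usage: model a Rosenbrock-like "expensive" function near a design point.
expensive = lambda x: 100.0 * (x[1] - x[0]**2)**2 + (1.0 - x[0])**2
model = diagonal_quadratic_model(expensive, np.array([0.5, 0.5]))
print(model(np.array([0.6, 0.4])), expensive(np.array([0.6, 0.4])))
```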
A Gauss-Newton Approach for Solving Constrained Optimization Problems Using Differentiable Exact Penalties
We propose a Gauss-Newton-type method for nonlinear constrained optimization using the exact penalty introduced recently by André and Silva for variational inequalities. We extend their penalty function to both equality and inequality constraints using a weak regularity assumption, and as a result, we obtain a continuously differentiable exact penalty function and a new reformulation of the KKT...
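For orientation, the following is a plain Gauss-Newton iteration for a nonlinear residual system; the paper above applies a Gauss-Newton-type step to a differentiable exact-penalty reformulation of the KKT conditions, which is not reproduced here, and the residual, Jacobian, and starting point are illustrative.

```python
import numpy as np

def gauss_newton(residual, jacobian, x0, tol=1e-10, max_iter=50):
    """Plain Gauss-Newton iteration for min 0.5*||r(x)||^2, shown only as the
    generic building block the abstract alludes to."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        r, J = residual(x), jacobian(x)
        step = np.linalg.lstsq(J, -r, rcond=None)[0]  # solve J step ~= -r
        x = x + step
        if np.linalg.norm(step) < tol:
            break
    return x

# Example: find a root of r(x) = [x1 + x2 - 3, x1*x2 - 2].
r = lambda x: np.array([x[0] + x[1] - 3.0, x[0] * x[1] - 2.0])
J = lambda x: np.array([[1.0, 1.0], [x[1], x[0]]])
print(gauss_newton(r, J, x0=[2.5, 0.5]))  # converges to (2, 1)
```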